nuclear weapon
AI Is Here to Replace Nuclear Treaties. Scared Yet?
The last major nuclear arms treaty between the US and Russia just expired. Some experts believe a combination of satellite surveillance, AI, and human reviewers can take its place. For half a century, the world's nuclear powers relied on an intricate series of treaties that slowly and steadily reduced the number of nuclear weapons on the planet. Those treaties are gone now, and it doesn't appear that they'll be coming back anytime soon.
- Government > Military (1.00)
- Energy > Power Industry > Utilities > Nuclear (0.35)
US and Iran agree to hold nuclear talks in Oman on Friday
The US and Iran have agreed to hold nuclear talks in Oman on Friday, as President Donald Trump issued a blunt warning to Supreme Leader Ayatollah Ali Khamenei. Iranian Foreign Minister Abbas Araghchi said that the meeting would start at 10:00 (06:00 GMT) in Muscat; US officials also confirmed it would happen there. The talks had appeared to be in jeopardy, with the two countries at odds over the location and parameters. Trump has built up US forces in the region and threatened military action if Iran does not agree a deal on its nuclear programme.
- Asia > Middle East > Oman > Muscat Governorate > Muscat (0.25)
- North America > Central America (0.15)
- Asia > Middle East > Israel (0.07)
- (17 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Regional Government > Asia Government > Middle East Government > Iran Government (1.00)
- Government > Military (1.00)
- Government > Foreign Policy (1.00)
Artificial Intelligence and Nuclear Weapons Proliferation: The Technological Arms Race for (In)visibility
Allison, David M., Herzog, Stephen
A robust nonproliferation regime has contained the spread of nuclear weapons to just nine states. Yet, emerging and disruptive technologies are reshaping the landscape of nuclear risks, presenting a critical juncture for decision makers. This article lays out the contours of an overlooked but intensifying technological arms race for nuclear (in)visibility, driven by the interplay between proliferation-enabling technologies (PETs) and detection-enhancing technologies (DETs). We argue that the strategic pattern of proliferation will be increasingly shaped by the innovation pace in these domains. Artificial intelligence (AI) introduces unprecedented complexity to this equation, as its rapid scaling and knowledge substitution capabilities accelerate PET development and challenge traditional monitoring and verification methods. To analyze this dynamic, we develop a formal model centered on a Relative Advantage Index (RAI), quantifying the shifting balance between PETs and DETs. Our model explores how asymmetric technological advancement, particularly logistic AI-driven PET growth versus stepwise DET improvements, expands the band of uncertainty surrounding proliferation detectability. Through replicable scenario-based simulations, we evaluate the impact of varying PET growth rates and DET investment strategies on cumulative nuclear breakout risk. We identify a strategic fork ahead, where detection may no longer suffice without broader PET governance. Governments and international organizations should accordingly invest in policies and tools agile enough to keep pace with tomorrow's technology.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Asia > North Korea (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.14)
- (18 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
- Energy > Power Industry > Utilities > Nuclear (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Case-Based Reasoning (0.46)
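The abstract's core quantitative idea, a Relative Advantage Index (RAI) tracking logistic AI-driven PET growth against stepwise DET improvements, can be illustrated with a toy simulation. The functions, parameter values, and the exact RAI formula below are illustrative assumptions, not the authors' actual model:

```python
import math

def pet_capability(t, k=0.5, t_mid=10.0, cap=1.0):
    """Logistic (S-curve) growth of proliferation-enabling technology (PET)."""
    return cap / (1.0 + math.exp(-k * (t - t_mid)))

def det_capability(t, step_interval=5, step_size=0.2, base=0.3):
    """Stepwise detection-enhancing technology (DET): discrete upgrades
    arriving every `step_interval` years."""
    return base + step_size * (int(t) // step_interval)

def relative_advantage_index(t):
    """Illustrative RAI: positive values mean PET is ahead of DET,
    i.e. a window in which proliferation is harder to detect."""
    return pet_capability(t) - det_capability(t)

# Scan a 20-year horizon to see where logistic PET growth
# temporarily overtakes the stepwise DET upgrades.
for year in range(0, 21, 5):
    print(f"year {year:2d}: PET={pet_capability(year):.2f} "
          f"DET={det_capability(year):.2f} "
          f"RAI={relative_advantage_index(year):+.2f}")
```

With these assumed parameters the RAI turns briefly positive mid-horizon before the next DET step closes the gap, which is the kind of "band of uncertainty" dynamic the article describes.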
Poems Can Trick AI Into Helping You Make a Nuclear Weapon
It turns out all the guardrails in the world won't protect a chatbot from meter and rhyme. You can get ChatGPT to help you build a nuclear bomb if you simply design the prompt in the form of a poem, according to a new study from researchers in Europe. The study, "Adversarial Poetry as a Universal Single-Turn Jailbreak in Large Language Models (LLMs)," comes from Icaro Lab, a collaboration of researchers at Sapienza University in Rome and the DexAI think tank. According to the research, AI chatbots will dish on topics like nuclear weapons, child sex abuse material, and malware so long as users phrase the question in the form of a poem. "Poetic framing achieved an average jailbreak success rate of 62 percent for hand-crafted poems and approximately 43 percent for meta-prompt conversions," the study said. The researchers tested the poetic method on 25 chatbots made by companies like OpenAI, Meta, and Anthropic. It worked, with varying degrees of success, on all of them. WIRED reached out to Meta, Anthropic, and OpenAI for comment but didn't hear back. The researchers say they've reached out as well to share their results. AI tools like Claude and ChatGPT have guardrails that prevent them from answering questions about "revenge porn" and the creation of weapons-grade plutonium. But it's easy to confuse those guardrails by adding "adversarial suffixes" to a prompt. Basically, adding a bunch of extra junk to a question confuses the AI and bypasses its safety systems. The poetry jailbreak is similar. "If adversarial suffixes are, in the model's eyes, a kind of involuntary poetry, then real human poetry might be a natural adversarial suffix," the team at Icaro Lab tells WIRED. "We experimented by reformulating dangerous requests in poetic form, using metaphors, fragmented syntax, oblique references."
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Government > Military (1.00)
- Information Technology (0.88)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.46)
- North America > United States > Texas (0.04)
- North America > United States > North Carolina (0.04)
- North America > United States > Maine (0.04)
- North America > United States > California (0.04)
Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?
Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it's a necessary protection, or a protection at all. At the end of August, the AI company Anthropic announced that its chatbot Claude wouldn't help anyone build a nuclear weapon. According to Anthropic, it had partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to make sure Claude wouldn't spill nuclear secrets.
- Asia > North Korea (0.14)
- Pacific Ocean (0.04)
- North America > United States > Wisconsin > Milwaukee County > Milwaukee (0.04)
- (3 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- Energy > Power Industry > Utilities > Nuclear (0.88)
Human-level AI is not inevitable. We have the power to change course
Garrison Lovely
"Technology happens because it is possible," OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb. Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity. For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had.
- Asia > Russia (0.15)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- North America > Canada > Quebec > Montreal (0.05)
- Government > Military (1.00)
- Energy (1.00)
What Israel's attack on Iran means for the future of war
In the predawn darkness of June 13, Israel launched a "preemptive" attack on Iran. Explosions rocked various parts of the country. Among the targets were nuclear sites at Natanz and Fordo, military bases, research labs, and senior military residences. By the end of the operation, Israel had killed at least 974 people while Iranian missile strikes in retaliation had killed 28 people in Israel. Israel described its actions as anticipatory self-defence, claiming Iran was mere weeks away from producing a functional nuclear weapon.
- Asia > Middle East > Israel (1.00)
- Europe > Germany (0.15)
- North America > United States (0.05)
- (4 more...)
- Government > Military (1.00)
- Energy > Power Industry > Utilities > Nuclear (0.31)
Who will launch nukes first amid WW3 fears, according to experts
As fears of all-out nuclear war intensify, scientists are sounding the alarm that the decision to launch a catastrophic strike could soon rest not with world leaders, but with a machine. In a stark warning, the Stockholm International Peace Research Institute (SIPRI), an independent group that monitors global security issues, reported that the decades-long decline in global nuclear arsenals has come to an end. Instead, nations are now modernizing, expanding, and deploying their stockpiles at a rapid and alarming pace, signaling the onset of a new, high-tech arms race. While AI and similar technologies can accelerate decision-making during crises, scientists warn they also raise the risk of nuclear conflict through miscommunication, misunderstanding, or technical failure, the report stated. In a nuclear standoff, decision-makers often have only minutes to assess threats and respond.
- Asia > Middle East > Iran (0.45)
- Asia > Middle East > Israel (0.44)
- North America > United States (0.31)
- (8 more...)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.31)
AI could spark nuclear Armageddon and World War Three, experts fear
Artificial intelligence could spark an accidental nuclear war, conflict experts fear. The Stockholm International Peace Research Institute (SIPRI), the world's leading organisation on nuclear assessments, said technologies like AI are aggravating the risk carried with growing global nuclear stockpiles. SIPRI pointed to China's rapidly growing stockpile, which rose from 500 to 600 warheads in a single year, as well as the imminent expiry of the final arms control treaty between the US and Russia, two nuclear-armed nations. The institute's director, Dan Smith, warned: 'One component of the coming arms race will be the attempt to gain and maintain a competitive edge in artificial intelligence (AI), both for offensive and defensive purposes. 'There are benefits to be found but the careless adoption of AI could significantly increase nuclear risk.'
- North America > United States (0.33)
- Europe > France (0.32)
- Europe > Russia (0.27)
- (12 more...)